algorithmic accountability

Terms from Artificial Intelligence: humans at the heart of algorithms

Page numbers refer to the draft copy at present; they will be replaced with the correct numbers when the final book is formatted. Chapter numbers are correct and will not change now.

Algorithmic accountability concerns the question of who is responsible, legally and ethically, when an algorithm in general, or AI in particular, goes wrong. In market-based economies, if companies (and their insurers) know that they will be sued, or even face criminal prosecution, if their AI fails, then it is assumed they will be proactive in creating safe and appropriate software. Explicit regulation is usually implemented too slowly for fast-moving technology, and may hamper innovation. It may therefore be argued that ensuring algorithmic accountability is a more effective, and even more democratic, option.

Used in Chap. 19: page 485; Chap. 20: page 487